32 research outputs found

    Comprehensive experimental performance analysis of DSR, AODV and DSDV routing protocol for different metrics values with predefined constraints

    A Mobile Ad-hoc Network is a multi-hop, self-configuring network without any fixed infrastructure. Due to node mobility, dynamic topology and a highly dynamic environment, designing and implementing stable routing in Mobile Ad-hoc Networking is a major challenge and a critical issue. This paper analyses the performance of the on-demand routing protocols Dynamic Source Routing (DSR) and Ad-hoc On-Demand Distance Vector (AODV), and the table-driven protocol Destination-Sequenced Distance Vector (DSDV), using the network simulator NS2. Different test scenarios have been designed with a fixed number of nodes but varying mobility. Performance metrics such as throughput, normalized network load, end-to-end delay, dropped packets and packet delivery ratio have been observed. The experimental results have been analysed, and recommendations based on the obtained results are made about the significance of each protocol in different scenarios and situations. The simulation results show that the protocols perform well within their own categories. We believe the findings of this paper will help researchers find the best protocol under predefined conditions with varied mobility, and to identify and further investigate any particular metric of AODV, DSR and DSDV
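    The metrics named in this abstract are typically computed from per-packet send/receive records extracted from a simulator trace. The sketch below is illustrative only (field names and the trace format are assumptions, not the actual NS2 trace layout): it derives packet delivery ratio, mean end-to-end delay and normalized routing load from such records.

    ```python
    # Hypothetical sketch: computing packet delivery ratio, mean end-to-end
    # delay and normalized routing load from per-packet records, as one might
    # extract from a simulation trace. Not the NS2 trace format itself.

    def compute_metrics(sent, received, routing_packets):
        """sent/received: lists of (packet_id, timestamp) tuples."""
        recv_times = dict(received)
        send_times = dict(sent)
        delivered = [pid for pid, _ in sent if pid in recv_times]
        pdr = len(delivered) / len(sent)                  # packet delivery ratio
        delays = [recv_times[p] - send_times[p] for p in delivered]
        avg_delay = sum(delays) / len(delays)             # mean end-to-end delay
        nrl = routing_packets / len(delivered)            # normalized routing load
        return pdr, avg_delay, nrl

    sent = [(1, 0.0), (2, 0.1), (3, 0.2), (4, 0.3)]
    received = [(1, 0.5), (2, 0.7), (4, 0.9)]
    pdr, delay, nrl = compute_metrics(sent, received, routing_packets=6)
    ```

    With these toy records, packet 3 is dropped, so the delivery ratio is 3/4 and the routing load is 6 control packets per 3 delivered data packets.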

    Future industry internet of things with zero-trust security

    Emerging techniques such as fifth-generation communications (5G), the Internet of Things (IoT), blockchain and artificial intelligence, operating in unison, will drive the transformation of global business forward. 5G is expected to unleash massive IoT ecosystems by providing connectivity for huge numbers of IoT devices with faster data rates, ultra-low latency and low-cost access. 5G networks will be designed to deliver the level of performance needed for massive IoT and will enable a perceived fully ubiquitous connected world. Meanwhile, blockchain is being promoted as the foundation for new business models in the Future IoT (FIoT). This paper attempts to provide a set of new directions and ideas for research in 5G/6G-enabled IoT and new technique trends in IoT. The current IoT faces a number of challenges, such as massive IoT device access, network performance, security, standardization, and critical applications. This paper investigates how new technologies such as 5G, zero-trust and blockchain will catalyse innovation in IoT. Specifically, a zero-trust security architecture for FIoT is proposed, together with a blockchain-based device authentication scheme for the IoT environment (BasIoT) that can provide massive secure device access in FIoT
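    The core idea behind ledger-backed device authentication can be sketched as follows: a device's identity digest is registered on a ledger at enrolment, and every access request is checked against that record. The dict below merely stands in for the blockchain, and all names are illustrative; the abstract does not specify BasIoT's actual transaction format.

    ```python
    # Hedged sketch of ledger-backed device authentication. A plain dict
    # stands in for the (permissioned) blockchain; in a real system the
    # digest would be written as an immutable ledger transaction.
    import hashlib

    ledger = {}  # device_id -> registered credential digest

    def register(device_id: str, secret: str) -> None:
        digest = hashlib.sha256(f"{device_id}:{secret}".encode()).hexdigest()
        ledger[device_id] = digest

    def authenticate(device_id: str, secret: str) -> bool:
        digest = hashlib.sha256(f"{device_id}:{secret}".encode()).hexdigest()
        return ledger.get(device_id) == digest

    register("sensor-42", "s3cret")
    ok = authenticate("sensor-42", "s3cret")
    bad = authenticate("sensor-42", "wrong")
    ```

    Because only the digest is stored, a compromised ledger replica never reveals the device secret itself.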

    Reliable data analysis through blockchain based crowdsourcing in mobile ad-hoc cloud

    A Mobile Ad-hoc Cloud (MAC) is a constellation of nearby mobile devices serving the heavy computational needs of resource-constrained edge devices. One of the major challenges of MAC is convincing mobile devices to offer their limited resources to the shared computational pool. A credit-based rewarding system is considered an effective way of incentivizing arbitrary mobile devices to join the MAC network and earn credits through computational crowdsourcing. The next challenge is obtaining reliable computation, as incentives attract malicious devices that submit fake computational results to claim their reward; we have used a blockchain-based reputation system for identifying the malicious participants of a MAC. This paper presents a malicious node identification algorithm integrated within an Iroha-based permissioned blockchain. Iroha is a Hyperledger project focused on mobile devices and thus lightweight in nature. It is used to keep track of the rewarding and reputation system driven by the malicious node detection algorithm. Experiments are conducted to evaluate the implemented test-bed, and the results show the effectiveness of the algorithm in identifying malicious devices and conducting reliable data analysis through blockchain-based computational crowdsourcing in MAC
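    One common way to realize a reputation-driven malicious-node check of this kind is to cross-validate redundant results from multiple workers, reward the majority, and decay the reputation of dissenters until they fall below a trust threshold. The sketch below illustrates that general pattern; it does not reproduce the paper's actual algorithm, and all constants are assumptions.

    ```python
    # Illustrative reputation update via majority voting over redundant
    # task results. Workers agreeing with the majority gain reputation
    # (capped at 1.0); dissenters are penalized; workers falling below
    # the threshold are flagged as suspected malicious nodes.
    from collections import Counter

    reputation = {"a": 1.0, "b": 1.0, "c": 1.0}
    THRESHOLD = 0.5

    def settle_task(results):
        """results: dict worker -> submitted answer for the same task."""
        majority, _ = Counter(results.values()).most_common(1)[0]
        for worker, answer in results.items():
            if answer == majority:
                reputation[worker] = min(1.0, reputation[worker] + 0.1)
            else:
                reputation[worker] *= 0.5   # penalize suspected fake result
        return {w for w, r in reputation.items() if r < THRESHOLD}

    settle_task({"a": 10, "b": 10, "c": 99})   # c submits a fake result
    flagged = settle_task({"a": 7, "b": 7, "c": 3})
    ```

    After two tasks with fabricated answers, worker `c` drops below the threshold and is flagged, while honest workers keep full reputation.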

    A dynamic and scalable user-centric route planning algorithm based on Polychromatic Sets theory

    Existing navigation services provide route options based on a single metric without considering the user's preferences. As a result, the planned route does not meet the actual needs of users. In this paper, a personalized route planning algorithm is proposed, which can provide users with a route that meets their requirements. Based on the multiple properties of the road, Polychromatic Sets (PS) theory is introduced into route planning. Firstly, a road-property description scheme based on PS theory is proposed. Using this scheme, users' travel preferences can be quantified, and personalized property combination schemes can be constructed from these properties. Secondly, the idea of setting priorities for road segments is utilized: based on a user's travel preference, all the property combination schemes can be prioritized at the relevant levels. Finally, based on the priority levels, an efficient path planning scheme is proposed, in which priority is given to the highest-priority road segments in the target direction. In addition, the system can constantly obtain real-time road information through mobile terminals, update road properties, and provide other users with more accurate road information and navigation services, so as to avoid crowded road segments without excessively increasing time consumption. Experimental results show that our algorithm can realize personalized route planning services without significantly increasing travel time and distance. In addition, the source code of the algorithm has been uploaded to GitHub so that it can be used by other researchers
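    The essence of preference-driven route planning can be sketched as follows: each road segment carries multiple properties, a user preference vector turns those properties into a scalar priority, and search expands preferred segments first. The Polychromatic Sets machinery of the paper is abstracted here into a simple weight function over segment properties; graph, properties and weights are all illustrative assumptions.

    ```python
    # Minimal sketch of preference-weighted route search: segment
    # properties (e.g. travel time, tolls) are combined into a scalar
    # cost by the user's preference weights, then Dijkstra finds the
    # lowest-cost (most preferred) route.
    import heapq

    def route(graph, prefs, start, goal):
        """graph: {node: [(neighbor, {prop: value}), ...]};
        prefs: {prop: weight}; lower combined cost = more preferred."""
        cost = lambda props: sum(prefs.get(k, 0) * v for k, v in props.items())
        pq, seen = [(0.0, start, [start])], set()
        while pq:
            c, node, path = heapq.heappop(pq)
            if node == goal:
                return c, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, props in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(pq, (c + cost(props), nxt, path + [nxt]))
        return None

    g = {"A": [("B", {"time": 5, "toll": 1}), ("C", {"time": 2, "toll": 0})],
         "C": [("B", {"time": 2, "toll": 0})]}
    best = route(g, {"time": 1.0, "toll": 10.0}, "A", "B")  # toll-averse user
    ```

    For this toll-averse preference vector the detour via C (cost 4.0) beats the direct tolled segment (cost 15.0); a time-only preference would pick the direct route instead.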

    Privacy preservation in artificial intelligence-enabled healthcare analytics

    Emerging techniques such as the Internet of Things (IoT), machine learning, and artificial intelligence (AI) have revolutionized healthcare analytics by offering a multitude of significant benefits, including real-time processing, enhanced data efficiency and optimization, offline operation, resilience, and personalized, context-aware healthcare. However, privacy concerns are significant when it comes to edge computing and machine learning-enabled healthcare analytics. The training and validation of AI algorithms face considerable obstacles due to privacy concerns and the stringent legal and ethical requirements associated with datasets. This work proposes a healthcare data anonymization framework that addresses privacy concerns and ensures compliance with data regulations by enhancing privacy protection and anonymizing sensitive information in healthcare analytics, maintaining a high level of privacy while minimizing any adverse effects on the analytics models. The experimental results showcase the effectiveness of the proposed solution
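    A common ingredient of such anonymization frameworks is dropping direct identifiers and generalizing quasi-identifiers so that individual records blend into groups while analytic fields survive intact. The sketch below shows that generic pattern only; the paper's actual framework is not reproduced, and the record fields are illustrative.

    ```python
    # Hedged sketch of record-level anonymization: remove the direct
    # identifier, bin age into a decade range, and truncate the ZIP code,
    # keeping the clinically useful field untouched for analytics.

    def anonymize(record):
        out = dict(record)
        out.pop("name", None)                     # drop direct identifier
        decade = record["age"] // 10 * 10
        out["age"] = f"{decade}-{decade + 9}"     # generalize quasi-identifier
        out["zip"] = record["zip"][:3] + "**"     # coarsen location
        return out

    rec = {"name": "Alice", "age": 34, "zip": "90210", "diagnosis": "flu"}
    anon = anonymize(rec)
    ```

    The anonymized record keeps the diagnosis usable for model training while the remaining fields only place the patient in a cohort, not identify her.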

    On the Load Balancing of Edge Computing Resources for On-Line Video Delivery

    Online video broadcasting platforms are distributed, complex, cloud-oriented, scalable, microservice-based systems intended to provide over-the-top and live content to audiences in scattered geographic locations. Due to the cost of cloud VM hosting, subscribers are usually served with limited resources in order to minimize the delivery budget. However, operations such as transcoding require high computational capacity, and any disturbance in supplying the requested demand may result in quality-of-experience (QoE) deterioration. For any online delivery deployment, understanding the user's QoE plays a crucial role in rebalancing cloud resources. In this paper, a methodology for estimating QoE is provided for a scalable cloud-based online video platform. The model provides a guideline for operating under limited cloud resources, relating the computational capacity, memory, transcoding and throughput capability, and latency competence of the cloud service to QoE. The scalability and efficiency of the system are optimized by determining the number of VMs and containers sufficient to satisfy user requests, even during peak demand, with a minimum number of VMs. Both horizontal and vertical scaling strategies (including VM migration) are modeled to ensure the availability and reliability of intermediate and edge content delivery network cache nodes
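    The capacity reasoning described above boils down to sizing the VM pool against peak demand. The back-of-the-envelope sketch below illustrates only that idea; the throughput figures, headroom factor and function name are assumptions, not the paper's model.

    ```python
    # Illustrative VM sizing for peak demand: given the peak concurrent
    # request rate and the per-VM serving/transcoding capacity, compute
    # the minimum VM count with a safety headroom for demand spikes.
    import math

    def vms_needed(peak_requests, per_vm_capacity, headroom=0.2):
        """Minimum VMs to serve peak demand with a safety headroom."""
        return math.ceil(peak_requests * (1 + headroom) / per_vm_capacity)

    n = vms_needed(peak_requests=950, per_vm_capacity=120)
    ```

    Horizontal scaling then adds or removes VMs as the measured peak moves, while vertical scaling would instead raise `per_vm_capacity`.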

    Information Fusion for 5G IoT: An Improved 3D Localisation Approach Using K-DNN and Multi-Layered Hybrid Radiomap

    Indoor positioning is a core enabler for various 5G identity- and context-aware applications requiring precise and real-time simultaneous localisation and mapping (SLAM). In this work, we propose a K-nearest neighbours and deep neural network (K-DNN) algorithm to improve 3D indoor positioning. Our implementation uses a novel data-augmentation concept for the received signal strength (RSS)-based fingerprint technique to produce a 3D fused hybrid radiomap. In the offline phase, a machine learning (ML) approach is used to train a model on the collected radiomap dataset. The proposed algorithm is implemented on the constructed hybrid multi-layered radiomap to improve 3D localisation accuracy. In our implementation, the proposed approach is based on the fusion of the prominent 5G IoT signals of Bluetooth Low Energy (BLE) and the ubiquitous WLAN. As a result, we achieved a 91% classification accuracy in 1D and submeter accuracy in 2D
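    The K-nearest-neighbours half of the K-DNN idea can be sketched in a few lines: the offline radiomap of RSS fingerprints is searched for the K reference points closest (in signal space) to a live measurement, and their known positions are averaged. The DNN stage and the BLE/WLAN fusion are omitted here, and the RSS values are illustrative.

    ```python
    # Minimal KNN fingerprint localization: find the k radiomap entries
    # with the smallest Euclidean distance in RSS space and average their
    # surveyed (x, y) positions.

    def knn_locate(radiomap, rss, k=2):
        """radiomap: list of (rss_vector, (x, y)); rss: live measurement."""
        dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
        nearest = sorted(radiomap, key=lambda e: dist(e[0], rss))[:k]
        xs = [p[0] for _, p in nearest]
        ys = [p[1] for _, p in nearest]
        return sum(xs) / k, sum(ys) / k

    radiomap = [([-40, -70], (0.0, 0.0)),
                ([-60, -50], (5.0, 0.0)),
                ([-70, -40], (5.0, 5.0))]
    pos = knn_locate(radiomap, [-58, -52], k=2)
    ```

    With two access points and three surveyed points, the live reading lands between the second and third fingerprints, so the estimate is their midpoint.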

    Multi-model running latency optimization in an edge computing paradigm

    Recent advances in both lightweight deep learning algorithms and edge computing increasingly enable multiple model inference tasks to be conducted concurrently on resource-constrained edge devices, allowing one goal to be achieved collaboratively rather than pursuing high quality in each standalone task. However, the high overall running latency of multi-model inference always negatively affects real-time applications. To combat latency, the algorithms should be optimized to minimize the latency of multi-model deployment without compromising safety-critical requirements. This work focuses on a real-time task scheduling strategy for multi-model deployment and investigates model inference using the Open Neural Network Exchange (ONNX) runtime engine. An application deployment strategy is then proposed based on container technology, with inference tasks scheduled to different containers according to the scheduling strategies. Experimental results show that the proposed solution significantly reduces the overall running latency in real-time applications
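    One standard way to schedule inference tasks across a fixed pool of containers so that the overall (makespan) latency shrinks is the longest-processing-time-first greedy heuristic, sketched below with assumed per-model latencies. This illustrates the general scheduling idea only; the paper's actual strategy and its ONNX Runtime integration are not reproduced here.

    ```python
    # Illustrative longest-processing-time-first (LPT) scheduling: assign
    # each inference task (by measured latency) to the currently least
    # loaded container, processing the longest tasks first, to reduce the
    # overall completion time (makespan).
    import heapq

    def schedule(latencies_ms, n_containers):
        """Returns (makespan, assignment: container -> list of latencies)."""
        heap = [(0.0, i) for i in range(n_containers)]   # (load, container)
        heapq.heapify(heap)
        assignment = {i: [] for i in range(n_containers)}
        for t in sorted(latencies_ms, reverse=True):     # longest first
            load, cid = heapq.heappop(heap)
            assignment[cid].append(t)
            heapq.heappush(heap, (load + t, cid))
        return max(load for load, _ in heap), assignment

    makespan, plan = schedule([30, 20, 20, 10, 10], n_containers=2)
    ```

    Running these five tasks serially would take 90 ms; splitting them across two containers with LPT finishes in 50 ms, the optimum for this instance.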